Introduction

1. What is the source of your dataset?

The dataset I considered is taken from the UCI Machine Learning Repository.

2. What is the context in which it was collected?

The dataset was collected by analyzing different case studies and categorizing the diseases according to their intensity. The data were collected from 270 participants of different age groups, genders, etc., using health-tracking devices connected via IoT (Internet of Things).

In this dataset, we have different predictor variables for scrutinizing and predicting heart-related diseases. These are measured across 13 different parameters: age, sex, chest pain type, BP, cholesterol, FBS over 120, EKG results, Max.HR, exercise angina, ST depression, number of vessels (fluoroscopy), thallium, and heart disease.

3. Do you have permission to share the dataset and its analyses?

Since this is an open-source dataset, I have permission to share and analyse it.

4. What approach was used to compile this dataset?

The approach I followed is linear regression, where I considered different numbers of predictor variables across four models. The first is a full model, which includes all the necessary variables and omits the rest. The second contains the predictor variables I am most interested in. The third contains the predictors filtered from the second model that have a significant relation with heart rate. The fourth contains the variables across the three models that showed a significant relationship with the human heart rate (Max.HR).

5. Present details of the variables in your dataset (include both the original variables and any new variables you create via transformation and/or combination of the original dataset). Include details of the transformations and computations in the description column.

SNo Name Description Measurement type Role
1 Age Age of the participant (in years) Numeric Predictor
2 Sex Gender of the participant Categorical Predictor
3 Blood pressure The resting blood pressure value Numeric Predictor
4 Cholesterol The serum cholesterol level, measured in mg/dl Numeric Predictor
5 Chest pain type 1 = typical angina, 2 = atypical angina, 3 = non-anginal pain, 4 = asymptomatic Categorical Outcome
6 FBS over 120 Fasting blood sugar over 120 mg/dl (1 = true, 0 = false) Numeric Outcome
7 ST depression Exercise-induced ST-segment depression relative to rest Numeric Outcome
8 Number of vessels The number of major heart vessels showing narrowing under fluoroscopy Numeric Outcome
9 Thallium Thallium stress-test result: 3 = normal, 6 = fixed defect, 7 = reversible defect Numeric Outcome
10 Max.HR The maximum heart rate achieved Numeric Predictor
11 Heart disease The presence or absence of heart disease Categorical Outcome
  1. Provide a list of two to three research questions. These should require non-summary analyses, via building appropriate models and interpreting their results, to be answered.

The problem statement that I am mainly going to address is: 'How can the data collected by IoMT devices, specifically heart rate, be used to predict diseases in the human body before they mature and cause severe illness?'

The questions associated with this problem statement are :

Conceptual model

  1. Begin by providing a data model, indicating, on the basis of the nature of the outcome variable, the specific type of GLM you intend to use.

I am going to scrutinize four different models: (1) all of the predictors included; (2) my preferred predictor variables; (3) the most accurate predictors filtered from model 2; and (4) the most significant predictors among the three models.

Y | β0, β1, β2, β3, β4, σ ~ N(μ, σ²)

μ = β0 + β1X1 + β2X2 + β3X3 + β4X4

with Y = Max.HR (maximum heart rate)

2) Provide a specification of the prior PDFs for each of the model parameters.

The prior probability density functions for all four models are based on the range of the heart rate. The range of Max.HR can be determined by applying the summary function to the dataset I selected.

The hypotheses for each of the model parameters, in their null form, are:

H0: For my first model, there is no relation between the age and sex parameters and Max.HR.

H01: For the second model, there is no significant relation between blood pressure (BP) and Max.HR.

H02: For the third model, there is no association between cholesterol and Max.HR.

4) Provide a tuning for the prior PDFs' parameter values. This will require you to derive these values from prior knowledge and null hypothesis statements.

The prior values I considered are based on the range of the human heart rate. The tuning of the models is done by analyzing the summary of the first model, which contains all the predictor variables; the other models are tuned accordingly based on that summary.

β0 ~ N(71, 65.5)

β1 ~ N(0, 1)
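As a quick check, the prior scale of 65.5 corresponds to half of the observed Max.HR range (a minimal sketch; the values 71 and 202 come from the summary of the dataset):

```r
# Sketch of how the prior scale can be derived from the observed
# Max.HR range (71 to 202 bpm, taken from summary(dataset)).
hr_min <- 71
hr_max <- 202

# Half of the observed range, used here as the prior standard deviation.
prior_sd <- (hr_max - hr_min) / 2
prior_sd
# [1] 65.5
```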

library(bayesrules)
library(tidyverse)
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.1 ──
## ✔ ggplot2 3.3.6     ✔ purrr   0.3.4
## ✔ tibble  3.1.7     ✔ dplyr   1.0.9
## ✔ tidyr   1.2.0     ✔ stringr 1.4.0
## ✔ readr   2.1.2     ✔ forcats 0.5.1
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag()    masks stats::lag()
library(bayesplot)
## This is bayesplot version 1.9.0
## - Online documentation and vignettes at mc-stan.org/bayesplot
## - bayesplot theme set to bayesplot::theme_default()
##    * Does _not_ affect other ggplot2 plots
##    * See ?bayesplot_theme_set for details on theme setting
library(rstanarm)
## Loading required package: Rcpp
## This is rstanarm version 2.21.3
## - See https://mc-stan.org/rstanarm/articles/priors for changes to default priors!
## - Default priors may change, so it's safest to specify priors, even if equivalent to the defaults.
## - For execution on a local, multicore CPU with excess RAM we recommend calling
##   options(mc.cores = parallel::detectCores())
library(broom.mixed)
library(tidybayes)
library(GGally)
## Registered S3 method overwritten by 'GGally':
##   method from   
##   +.gg   ggplot2
dataset <- read.csv("~/Downloads/Heart_Disease_Prediction.csv", header=TRUE)

The required dataset is loaded by specifying its exact path; header = TRUE treats the first row as column names.

view(dataset)
summary(dataset)
##       Age             Sex         Chest.pain.type       BP       
##  Min.   :29.00   Min.   :0.0000   Min.   :1.000   Min.   : 94.0  
##  1st Qu.:48.00   1st Qu.:0.0000   1st Qu.:3.000   1st Qu.:120.0  
##  Median :55.00   Median :1.0000   Median :3.000   Median :130.0  
##  Mean   :54.43   Mean   :0.6778   Mean   :3.174   Mean   :131.3  
##  3rd Qu.:61.00   3rd Qu.:1.0000   3rd Qu.:4.000   3rd Qu.:140.0  
##  Max.   :77.00   Max.   :1.0000   Max.   :4.000   Max.   :200.0  
##   Cholesterol     FBS.over.120     EKG.results        Max.HR     
##  Min.   :126.0   Min.   :0.0000   Min.   :0.000   Min.   : 71.0  
##  1st Qu.:213.0   1st Qu.:0.0000   1st Qu.:0.000   1st Qu.:133.0  
##  Median :245.0   Median :0.0000   Median :2.000   Median :153.5  
##  Mean   :249.7   Mean   :0.1481   Mean   :1.022   Mean   :149.7  
##  3rd Qu.:280.0   3rd Qu.:0.0000   3rd Qu.:2.000   3rd Qu.:166.0  
##  Max.   :564.0   Max.   :1.0000   Max.   :2.000   Max.   :202.0  
##  Exercise.angina  ST.depression   Slope.of.ST    Number.of.vessels.fluro
##  Min.   :0.0000   Min.   :0.00   Min.   :1.000   Min.   :0.0000         
##  1st Qu.:0.0000   1st Qu.:0.00   1st Qu.:1.000   1st Qu.:0.0000         
##  Median :0.0000   Median :0.80   Median :2.000   Median :0.0000         
##  Mean   :0.3296   Mean   :1.05   Mean   :1.585   Mean   :0.6704         
##  3rd Qu.:1.0000   3rd Qu.:1.60   3rd Qu.:2.000   3rd Qu.:1.0000         
##  Max.   :1.0000   Max.   :6.20   Max.   :3.000   Max.   :3.0000         
##     Thallium     Heart.Disease     
##  Min.   :3.000   Length:270        
##  1st Qu.:3.000   Class :character  
##  Median :3.000   Mode  :character  
##  Mean   :4.696                     
##  3rd Qu.:7.000                     
##  Max.   :7.000
str(dataset)
## 'data.frame':    270 obs. of  14 variables:
##  $ Age                    : int  70 67 57 64 74 65 56 59 60 63 ...
##  $ Sex                    : int  1 0 1 1 0 1 1 1 1 0 ...
##  $ Chest.pain.type        : int  4 3 2 4 2 4 3 4 4 4 ...
##  $ BP                     : int  130 115 124 128 120 120 130 110 140 150 ...
##  $ Cholesterol            : int  322 564 261 263 269 177 256 239 293 407 ...
##  $ FBS.over.120           : int  0 0 0 0 0 0 1 0 0 0 ...
##  $ EKG.results            : int  2 2 0 0 2 0 2 2 2 2 ...
##  $ Max.HR                 : int  109 160 141 105 121 140 142 142 170 154 ...
##  $ Exercise.angina        : int  0 0 0 1 1 0 1 1 0 0 ...
##  $ ST.depression          : num  2.4 1.6 0.3 0.2 0.2 0.4 0.6 1.2 1.2 4 ...
##  $ Slope.of.ST            : int  2 2 1 2 1 1 2 2 2 2 ...
##  $ Number.of.vessels.fluro: int  3 0 0 1 1 0 1 1 2 3 ...
##  $ Thallium               : int  3 7 7 7 3 7 6 7 7 7 ...
##  $ Heart.Disease          : chr  "Presence" "Absence" "Presence" "Absence" ...

1) The view function is used to view the contents of the dataset.

2) The summary function is used to summarize each variable in the dataset.

3) The str function shows the structure of the data frame: each variable's name, type, and first few values.

dataset1=dataset%>%select(Age,BP,FBS.over.120,Cholesterol,Chest.pain.type,ST.depression,Number.of.vessels.fluro,Thallium,Sex,Max.HR)

The select function is used to pick the required columns from the dataset. In the code above, the selected columns are stored in a new data frame called dataset1.

ggpairs(dataset1)

ggpairs helps to find the correlations between the variables in dataset1. In the resulting plot matrix, Max.HR is negatively correlated with most of the predictor variables, with the exception of the blood sugar level.
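The sign of such a pairwise correlation can be illustrated with base R's cor() on a small made-up data frame (a toy sketch, not the real dataset):

```r
# Minimal sketch of the pairwise-correlation idea behind ggpairs,
# using a small made-up data frame (not the real heart dataset).
toy <- data.frame(
  Age    = c(29, 45, 55, 63, 77),
  Max.HR = c(190, 172, 155, 140, 120)
)

# cor() returns the Pearson correlation; a negative value indicates
# that Max.HR tends to fall as Age rises.
cor(toy$Age, toy$Max.HR)
```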

dataset1 <- na.omit(dataset1)
summary(dataset1)
##       Age              BP         FBS.over.120     Cholesterol   
##  Min.   :29.00   Min.   : 94.0   Min.   :0.0000   Min.   :126.0  
##  1st Qu.:48.00   1st Qu.:120.0   1st Qu.:0.0000   1st Qu.:213.0  
##  Median :55.00   Median :130.0   Median :0.0000   Median :245.0  
##  Mean   :54.43   Mean   :131.3   Mean   :0.1481   Mean   :249.7  
##  3rd Qu.:61.00   3rd Qu.:140.0   3rd Qu.:0.0000   3rd Qu.:280.0  
##  Max.   :77.00   Max.   :200.0   Max.   :1.0000   Max.   :564.0  
##  Chest.pain.type ST.depression  Number.of.vessels.fluro    Thallium    
##  Min.   :1.000   Min.   :0.00   Min.   :0.0000          Min.   :3.000  
##  1st Qu.:3.000   1st Qu.:0.00   1st Qu.:0.0000          1st Qu.:3.000  
##  Median :3.000   Median :0.80   Median :0.0000          Median :3.000  
##  Mean   :3.174   Mean   :1.05   Mean   :0.6704          Mean   :4.696  
##  3rd Qu.:4.000   3rd Qu.:1.60   3rd Qu.:1.0000          3rd Qu.:7.000  
##  Max.   :4.000   Max.   :6.20   Max.   :3.0000          Max.   :7.000  
##       Sex             Max.HR     
##  Min.   :0.0000   Min.   : 71.0  
##  1st Qu.:0.0000   1st Qu.:133.0  
##  Median :1.0000   Median :153.5  
##  Mean   :0.6778   Mean   :149.7  
##  3rd Qu.:1.0000   3rd Qu.:166.0  
##  Max.   :1.0000   Max.   :202.0

The na.omit function removes the rows that contain NA values, which helps in more accurate prediction; note that the result must be assigned back to dataset1 for the change to take effect.
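A minimal toy example (not the heart dataset) showing why the assignment matters:

```r
# na.omit() returns a new data frame; the result must be assigned
# back, otherwise the original object is left unchanged.
toy <- data.frame(x = c(1, 2, NA, 4), y = c(10, NA, 30, 40))

toy_clean <- na.omit(toy)  # rows with any NA are dropped
nrow(toy_clean)            # only the 2 complete rows remain
```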

Setting up the model

Although the Heart.Disease variable is binary, i.e. the disease is present in the body or absent, and would call for a logistic regression approach (which gives the probability of the event occurring based on the independent variables), the outcome modelled here, Max.HR, is continuous. I therefore use a Normal (Gaussian) GLM, i.e. Bayesian linear regression, with the filtered variables, most of which are independent, as predictors.

The data model I am going to examine is the Normal regression model specified above.

Building models

I am going to build four different models.

The prior intercept that I am considering reflects the range of the heartbeat, which is gathered by analyzing the summary function. Hence the prior intercept I selected is N(71, 65.5), because the heartbeat ranges from 71 to 202.

ggpairs(dataset1)

plot_normal(mean = 71,sd=65.5)

The exploratory analysis is performed using the ggpairs function, which shows that the correlation of the heart rate with most of the parameters I considered is negative.

Implementing the model

Model-1(Full model)

  1. Implement a full model, including all the relevant variables as predictors. If you have chosen to exclude any variables, explain why.

    I include Age, BP, FBS.over.120, Cholesterol, Chest.pain.type, ST.depression, Number.of.vessels.fluro, Thallium, and Sex as predictors of Max.HR, assuming the predictors are not statistically significantly related to each other. I set the heart-rate range as the prior_intercept and a weakly informative normal prior on the coefficients; autoscale = TRUE lets rstanarm rescale that prior to each predictor's units.

model_1 <- stan_glm(
  Max.HR~., 
  data = dataset1, family = gaussian,
  prior_intercept = normal(71,65.5),
  prior = normal(0, 1, autoscale = TRUE), 
  prior_aux = exponential(1, autoscale = TRUE),
  chains = 4, iter = 5000*2, seed = 84735,prior_PD = TRUE)
## 
## SAMPLING FOR MODEL 'continuous' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 4.5e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.45 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:    1 / 10000 [  0%]  (Warmup)
## Chain 1: Iteration: 1000 / 10000 [ 10%]  (Warmup)
## Chain 1: Iteration: 2000 / 10000 [ 20%]  (Warmup)
## Chain 1: Iteration: 3000 / 10000 [ 30%]  (Warmup)
## Chain 1: Iteration: 4000 / 10000 [ 40%]  (Warmup)
## Chain 1: Iteration: 5000 / 10000 [ 50%]  (Warmup)
## Chain 1: Iteration: 5001 / 10000 [ 50%]  (Sampling)
## Chain 1: Iteration: 6000 / 10000 [ 60%]  (Sampling)
## Chain 1: Iteration: 7000 / 10000 [ 70%]  (Sampling)
## Chain 1: Iteration: 8000 / 10000 [ 80%]  (Sampling)
## Chain 1: Iteration: 9000 / 10000 [ 90%]  (Sampling)
## Chain 1: Iteration: 10000 / 10000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.143027 seconds (Warm-up)
## Chain 1:                0.155151 seconds (Sampling)
## Chain 1:                0.298178 seconds (Total)
## Chain 1: 
## 
## (Sampling output for chains 2-4 omitted; similar to chain 1.)
summary(model_1)
## 
## Model Info:
##  function:     stan_glm
##  family:       gaussian [identity]
##  formula:      Max.HR ~ .
##  algorithm:    sampling
##  sample:       20000 (posterior sample size)
##  priors:       see help('prior_summary')
##  observations: 270
##  predictors:   10
## 
## Estimates:
##                           mean   sd     10%    50%    90% 
## (Intercept)               68.4  277.4 -285.4   67.2  422.6
## Age                        0.0    2.6   -3.3    0.0    3.3
## BP                         0.0    1.3   -1.6    0.0    1.7
## FBS.over.120              -0.8   65.2  -84.3   -0.4   83.0
## Cholesterol                0.0    0.4   -0.6    0.0    0.6
## Chest.pain.type           -0.1   24.2  -31.3    0.1   31.0
## ST.depression              0.0   20.4  -26.1   -0.1   26.1
## Number.of.vessels.fluro    0.0   24.4  -31.7    0.0   31.7
## Thallium                   0.0   12.0  -15.4   -0.1   15.4
## Sex                       -0.2   49.6  -63.6    0.0   62.9
## sigma                     23.2   22.9    2.4   16.2   53.2
## 
## MCMC diagnostics
##                         mcse Rhat n_eff
## (Intercept)             1.8  1.0  24293
## Age                     0.0  1.0  24868
## BP                      0.0  1.0  23988
## FBS.over.120            0.4  1.0  23997
## Cholesterol             0.0  1.0  25961
## Chest.pain.type         0.2  1.0  23869
## ST.depression           0.1  1.0  22375
## Number.of.vessels.fluro 0.2  1.0  24384
## Thallium                0.1  1.0  24270
## Sex                     0.3  1.0  23243
## sigma                   0.1  1.0  27181
## log-posterior           0.0  1.0  10360
## 
## For each parameter, mcse is Monte Carlo standard error, n_eff is a crude measure of effective sample size, and Rhat is the potential scale reduction factor on split chains (at convergence Rhat=1).

Because prior_PD = TRUE, this summary describes the prior (rather than posterior) distributions: the intercept has a mean of 68.4 and a standard deviation of 277.4, with an upper 90% credible bound of 422.6. The Rhat value is 1 for all the predictor variables.

Check the reliability of the MCMC simulations by reporting appropriate graphical and numerical measures and interpreting them.

mcmc_trace(model_1, size = 0.1)

mcmc_dens_overlay(model_1)

mcmc_acf(model_1)

neff_ratio(model_1)
##             (Intercept)                     Age                      BP 
##                 1.21465                 1.24340                 1.19940 
##            FBS.over.120             Cholesterol         Chest.pain.type 
##                 1.19985                 1.29805                 1.19345 
##           ST.depression Number.of.vessels.fluro                Thallium 
##                 1.11875                 1.21920                 1.21350 
##                     Sex                   sigma 
##                 1.16215                 1.35905
rhat(model_1)
##             (Intercept)                     Age                      BP 
##               1.0000148               0.9998946               0.9998509 
##            FBS.over.120             Cholesterol         Chest.pain.type 
##               1.0000517               1.0001498               1.0002919 
##           ST.depression Number.of.vessels.fluro                Thallium 
##               0.9999149               1.0001425               0.9999348 
##                     Sex                   sigma 
##               0.9998162               0.9999887

The mcmc_trace function plots the chains. The chains produced for the above model appear to mix well together, so they are considered stable.

The mcmc_dens_overlay function shows the probability density of each parameter per chain; in the graphs above, the densities overlap with each other sufficiently well.

The Rhat values are all approximately 1, indicating that the chains have converged, and the mcmc_acf plots show the autocorrelation dropping off quickly, so there are no autocorrelation issues.

prior_summary(model_1)
## Priors for model 'model_1' 
## ------
## Intercept (after predictors centered)
##  ~ normal(location = 71, scale = 66)
## 
## Coefficients
##   Specified prior:
##     ~ normal(location = [0,0,0,...], scale = [1,1,1,...])
##   Adjusted prior:
##     ~ normal(location = [0,0,0,...], scale = [ 2.54, 1.30,65.09,...])
## 
## Auxiliary (sigma)
##   Specified prior:
##     ~ exponential(rate = 1)
##   Adjusted prior:
##     ~ exponential(rate = 0.043)
## ------
## See help('prior_summary.stanreg') for more details
  1. Interpret the results of the model's parameters' PDFs - explain whether there is support for the hypotheses you posited earlier, one hypothesis at a time, by connecting the results to the hypothesis' specification.

    The prior_summary function gives the priors of the specified model; the adjusted scales show how autoscale rescaled the coefficient priors to each predictor's units. Since the prior values did not vary drastically from what I specified, the hypotheses I posited earlier remain plausible under these priors.
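A minimal sketch (with simulated draws standing in for the fitted model's output) of how such a hypothesis can be read off posterior samples: check whether 0 lies inside the 95% credible interval of the coefficient.

```r
# Sketch of assessing a null hypothesis (e.g. H01: no relation
# between BP and Max.HR) from posterior draws of a coefficient.
# The draws below are simulated stand-ins, not real model output.
set.seed(84735)
beta_draws <- rnorm(20000, mean = -0.5, sd = 0.2)

# 95% credible interval from the draws.
ci <- quantile(beta_draws, probs = c(0.025, 0.975))

# If the interval excludes 0, the data favour rejecting H0.
ci[1] > 0 | ci[2] < 0
```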

set.seed(84735)
posteriorpredict=posterior_predict(model_1)
posterior_interval(posteriorpredict,prob=0.95)
##           2.5%    97.5%
## 1   -158.00778 295.6741
## 2   -262.96589 407.3006
## 3   -102.61325 243.8741
## 4   -105.34161 245.0872
## 5   -135.33827 274.6445
## 6   -119.29196 258.8661
## 7   -114.19625 256.1693
## 8   -103.38281 244.5444
## 9   -111.69581 255.4053
## 10  -213.45535 357.4354
##  ... (rows 11-270 omitted; each row gives the 95% interval for one observation) ...
summary(posteriorpredict)
##        1                  2                 3                 4          
##  Min.   :-411.197   Min.   :-617.63   Min.   :-333.36   Min.   :-320.94  
##  1st Qu.:  -7.231   1st Qu.: -44.27   1st Qu.:  12.80   1st Qu.:  11.93  
##  Median :  69.922   Median :  72.25   Median :  71.31   Median :  71.57  
##  Mean   :  70.429   Mean   :  71.59   Mean   :  71.20   Mean   :  71.12  
##  3rd Qu.: 148.322   3rd Qu.: 189.47   3rd Qu.: 129.93   3rd Qu.: 129.68  
##  Max.   : 590.500   Max.   : 765.19   Max.   : 463.30   Max.   : 491.31  
##        5                  6                  7                  8          
##  Min.   :-348.573   Min.   :-540.983   Min.   :-449.383   Min.   :-388.63  
##  1st Qu.:   1.511   1st Qu.:   7.226   1st Qu.:   7.229   1st Qu.:  12.31  
##  Median :  72.068   Median :  69.436   Median :  70.543   Median :  70.89  
##  Mean   :  71.283   Mean   :  70.491   Mean   :  69.962   Mean   :  70.74  
##  3rd Qu.: 140.406   3rd Qu.: 133.819   3rd Qu.: 132.388   3rd Qu.: 128.83  
##  Max.   : 627.325   Max.   : 561.612   Max.   : 726.729   Max.   : 495.75  
##        9                  10                11                12         
##  Min.   :-409.860   Min.   :-531.63   Min.   :-403.06   Min.   :-347.04  
##  1st Qu.:   8.981   1st Qu.: -27.35   1st Qu.:  13.80   1st Qu.:  12.07  
##  Median :  71.477   Median :  72.98   Median :  70.63   Median :  70.67  
##  Mean   :  71.144   Mean   :  71.45   Mean   :  70.72   Mean   :  71.28  
##  3rd Qu.: 133.273   3rd Qu.: 169.91   3rd Qu.: 128.04   3rd Qu.: 130.00  
##  Max.   : 588.653   Max.   : 643.42   Max.   : 460.00   Max.   : 478.19  
##        13                14                  15                16          
##  Min.   :-447.62   Min.   :-331.6175   Min.   :-409.88   Min.   :-336.007  
##  1st Qu.:  12.88   1st Qu.:   0.6916   1st Qu.:  11.41   1st Qu.:  -2.246  
##  Median :  71.20   Median :  72.3307   Median :  71.38   Median :  70.701  
##  Mean   :  71.29   Mean   :  71.7353   Mean   :  71.53   Mean   :  71.034  
##  3rd Qu.: 129.07   3rd Qu.: 141.8790   3rd Qu.: 132.46   3rd Qu.: 145.117  
##  Max.   : 435.63   Max.   : 493.2132   Max.   : 467.06   Max.   : 492.574  
##        17                 18                 19                 20           
##  Min.   :-488.453   Min.   :-422.517   Min.   :-490.368   Min.   :-403.8621  
##  1st Qu.:   6.586   1st Qu.:  -4.293   1st Qu.:   1.472   1st Qu.:  -0.5977  
##  Median :  71.238   Median :  69.876   Median :  71.539   Median :  71.2935  
##  Mean   :  71.255   Mean   :  70.111   Mean   :  70.790   Mean   :  71.3837  
##  3rd Qu.: 136.584   3rd Qu.: 143.934   3rd Qu.: 141.029   3rd Qu.: 142.6228  
##  Max.   : 501.219   Max.   : 594.672   Max.   : 489.602   Max.   : 589.1934  
##        21                 22                23                 24          
##  Min.   :-520.463   Min.   :-492.67   Min.   :-355.060   Min.   :-384.286  
##  1st Qu.:   2.753   1st Qu.:  13.02   1st Qu.:   9.186   1st Qu.:   9.427  
##  Median :  71.443   Median :  71.44   Median :  70.320   Median :  70.164  
##  Mean   :  70.355   Mean   :  71.44   Mean   :  70.646   Mean   :  70.669  
##  3rd Qu.: 137.795   3rd Qu.: 129.21   3rd Qu.: 133.497   3rd Qu.: 131.458  
##  Max.   : 474.692   Max.   : 483.83   Max.   : 545.250   Max.   : 522.441  
##        25                  26                27                28         
##  Min.   :-402.1486   Min.   :-354.64   Min.   :-409.57   Min.   :-402.01  
##  1st Qu.:   0.3878   1st Qu.:  13.25   1st Qu.:  10.14   1st Qu.:  12.20  
##  Median :  70.2167   Median :  71.55   Median :  71.72   Median :  70.97  
##  Mean   :  70.9480   Mean   :  71.36   Mean   :  71.28   Mean   :  71.25  
##  3rd Qu.: 141.2706   3rd Qu.: 130.24   3rd Qu.: 132.83   3rd Qu.: 130.56  
##  Max.   : 553.2390   Max.   : 632.23   Max.   : 567.23   Max.   : 513.11  
##        29                30                 31                32          
##  Min.   :-405.64   Min.   :-513.713   Min.   :-361.43   Min.   :-453.288  
##  1st Qu.:  10.56   1st Qu.:  -6.742   1st Qu.:  16.14   1st Qu.:   5.037  
##  Median :  71.72   Median :  69.892   Median :  70.69   Median :  71.316  
##  Mean   :  70.96   Mean   :  69.968   Mean   :  70.76   Mean   :  71.088  
##  3rd Qu.: 130.56   3rd Qu.: 146.280   3rd Qu.: 125.86   3rd Qu.: 135.929  
##  Max.   : 470.66   Max.   : 617.303   Max.   : 537.98   Max.   : 578.401  
##        33                 34                 35                36           
##  Min.   :-355.563   Min.   :-438.287   Min.   :-359.33   Min.   :-403.3822  
##  1st Qu.:   3.806   1st Qu.:  -5.749   1st Qu.:  11.83   1st Qu.:  -0.4558  
##  Median :  71.320   Median :  72.069   Median :  69.98   Median :  71.2452  
##  Mean   :  70.898   Mean   :  71.362   Mean   :  70.95   Mean   :  70.3392  
##  3rd Qu.: 137.476   3rd Qu.: 149.108   3rd Qu.: 129.61   3rd Qu.: 140.5517  
##  Max.   : 585.841   Max.   : 730.120   Max.   : 485.59   Max.   : 483.6220  
##        37                38                 39                 40         
##  Min.   :-366.78   Min.   :-523.538   Min.   :-424.419   Min.   :-334.73  
##  1st Qu.:  11.93   1st Qu.:   1.813   1st Qu.:   8.524   1st Qu.:  12.44  
##  Median :  70.84   Median :  71.481   Median :  70.349   Median :  70.67  
##  Mean   :  70.95   Mean   :  71.668   Mean   :  70.511   Mean   :  70.85  
##  3rd Qu.: 130.58   3rd Qu.: 141.511   3rd Qu.: 132.526   3rd Qu.: 129.05  
##  Max.   : 490.93   Max.   : 489.918   Max.   : 468.996   Max.   : 489.35  
##        41                 42                 43                44          
##  Min.   :-485.158   Min.   :-316.435   Min.   :-283.92   Min.   :-482.733  
##  1st Qu.:   5.504   1st Qu.:   8.755   1st Qu.:  14.75   1st Qu.:  -8.157  
##  Median :  70.975   Median :  71.852   Median :  71.33   Median :  70.134  
##  Mean   :  71.247   Mean   :  71.255   Mean   :  71.10   Mean   :  69.618  
##  3rd Qu.: 137.350   3rd Qu.: 134.427   3rd Qu.: 127.16   3rd Qu.: 146.611  
##  Max.   : 474.642   Max.   : 459.779   Max.   : 466.51   Max.   : 572.136  
##        45                 46                47                 48          
##  Min.   :-341.544   Min.   :-350.30   Min.   :-367.982   Min.   :-347.831  
##  1st Qu.:   3.386   1st Qu.:   3.03   1st Qu.:  -1.471   1st Qu.:   7.525  
##  Median :  69.561   Median :  69.68   Median :  71.112   Median :  70.102  
##  Mean   :  70.034   Mean   :  70.59   Mean   :  70.506   Mean   :  70.325  
##  3rd Qu.: 135.720   3rd Qu.: 137.51   3rd Qu.: 142.549   3rd Qu.: 132.646  
##  Max.   : 494.134   Max.   : 470.36   Max.   : 523.830   Max.   : 458.801  
##        49                 50                 51                 52           
##  Min.   :-383.909   Min.   :-501.219   Min.   :-384.905   Min.   :-388.4070  
##  1st Qu.:  -4.993   1st Qu.:  -3.877   1st Qu.:   8.454   1st Qu.:   0.3531  
##  Median :  72.328   Median :  71.098   Median :  71.046   Median :  69.7409  
##  Mean   :  71.656   Mean   :  71.032   Mean   :  71.157   Mean   :  70.1362  
##  3rd Qu.: 148.482   3rd Qu.: 146.439   3rd Qu.: 134.050   3rd Qu.: 139.8470  
##  Max.   : 571.978   Max.   : 480.598   Max.   : 450.069   Max.   : 566.7491  
##        53                54                 55                 56           
##  Min.   :-525.93   Min.   :-411.341   Min.   :-366.286   Min.   :-356.5295  
##  1st Qu.: -15.80   1st Qu.:   3.648   1st Qu.:   9.606   1st Qu.:   0.5407  
##  Median :  71.27   Median :  71.136   Median :  72.086   Median :  70.2662  
##  Mean   :  70.95   Mean   :  71.301   Mean   :  71.646   Mean   :  70.6473  
##  3rd Qu.: 158.20   3rd Qu.: 139.025   3rd Qu.: 133.571   3rd Qu.: 140.4157  
##  Max.   : 581.04   Max.   : 535.220   Max.   : 578.476   Max.   : 559.6981  
##        57                 58                 59                  60         
##  Min.   :-534.396   Min.   :-488.533   Min.   :-338.5381   Min.   :-528.27  
##  1st Qu.:   1.877   1st Qu.:  -4.279   1st Qu.:   0.5888   1st Qu.:  11.25  
##  Median :  69.919   Median :  69.912   Median :  72.4543   Median :  71.01  
##  Mean   :  70.447   Mean   :  70.035   Mean   :  71.8368   Mean   :  71.05  
##  3rd Qu.: 140.047   3rd Qu.: 144.439   3rd Qu.: 142.5730   3rd Qu.: 130.74  
##  Max.   : 584.697   Max.   : 558.006   Max.   : 509.7292   Max.   : 636.13  
##        61                 62                63                64          
##  Min.   :-489.548   Min.   :-354.95   Min.   :-333.15   Min.   :-364.984  
##  1st Qu.:  -6.441   1st Qu.:   9.45   1st Qu.:  11.29   1st Qu.:   2.063  
##  Median :  69.725   Median :  70.99   Median :  71.49   Median :  71.662  
##  Mean   :  70.093   Mean   :  70.97   Mean   :  70.61   Mean   :  71.886  
##  3rd Qu.: 147.396   3rd Qu.: 131.98   3rd Qu.: 128.92   3rd Qu.: 141.093  
##  Max.   : 553.789   Max.   : 575.54   Max.   : 530.82   Max.   : 685.179  
##        65                 66                67                68          
##  Min.   :-366.722   Min.   :-382.53   Min.   :-397.32   Min.   :-482.641  
##  1st Qu.:  -7.481   1st Qu.:  14.30   1st Qu.:  14.68   1st Qu.:  -5.335  
##  Median :  69.598   Median :  71.25   Median :  71.37   Median :  70.498  
##  Mean   :  70.668   Mean   :  71.25   Mean   :  71.18   Mean   :  71.291  
##  3rd Qu.: 149.685   3rd Qu.: 127.86   3rd Qu.: 128.33   3rd Qu.: 147.982  
##  Max.   : 598.621   Max.   : 496.84   Max.   : 536.49   Max.   : 623.833  
##        69                70                71                 72          
##  Min.   :-396.71   Min.   :-353.52   Min.   :-429.923   Min.   :-346.682  
##  1st Qu.:  10.44   1st Qu.:  11.45   1st Qu.:   2.138   1st Qu.:   3.968  
##  Median :  71.48   Median :  70.84   Median :  70.862   Median :  72.031  
##  Mean   :  70.99   Mean   :  70.43   Mean   :  70.647   Mean   :  71.186  
##  3rd Qu.: 131.78   3rd Qu.: 129.62   3rd Qu.: 137.913   3rd Qu.: 138.327  
##  Max.   : 605.65   Max.   : 492.01   Max.   : 540.051   Max.   : 474.376  
##        73                 74                 75                 76          
##  Min.   :-311.800   Min.   :-454.863   Min.   :-356.206   Min.   :-380.907  
##  1st Qu.:   4.835   1st Qu.:   1.331   1st Qu.:   1.495   1st Qu.:  -3.752  
##  Median :  70.893   Median :  70.797   Median :  70.669   Median :  71.724  
##  Mean   :  71.862   Mean   :  71.439   Mean   :  70.614   Mean   :  71.208  
##  3rd Qu.: 139.147   3rd Qu.: 140.970   3rd Qu.: 139.871   3rd Qu.: 145.860  
##  Max.   : 547.002   Max.   : 553.516   Max.   : 537.455   Max.   : 552.658  
##        77                78                 79                 80         
##  Min.   :-372.61   Min.   :-503.009   Min.   :-316.346   Min.   :-334.10  
##  1st Qu.:   1.77   1st Qu.:  -1.817   1st Qu.:   7.193   1st Qu.:  14.02  
##  Median :  70.40   Median :  70.623   Median :  71.730   Median :  70.20  
##  Mean   :  70.23   Mean   :  70.940   Mean   :  71.029   Mean   :  70.60  
##  3rd Qu.: 139.26   3rd Qu.: 143.046   3rd Qu.: 134.767   3rd Qu.: 127.34  
##  Max.   : 466.37   Max.   : 502.400   Max.   : 594.330   Max.   : 427.01  
##        81                82                 83                84          
##  Min.   :-512.97   Min.   :-377.256   Min.   :-306.73   Min.   :-408.221  
##  1st Qu.:  12.11   1st Qu.:   2.996   1st Qu.:  12.50   1st Qu.:   9.966  
##  Median :  70.59   Median :  69.642   Median :  71.57   Median :  71.185  
##  Mean   :  70.99   Mean   :  70.629   Mean   :  71.48   Mean   :  70.874  
##  3rd Qu.: 129.68   3rd Qu.: 137.764   3rd Qu.: 130.08   3rd Qu.: 131.980  
##  Max.   : 488.05   Max.   : 548.832   Max.   : 619.13   Max.   : 555.700  
##        85                86                  87                 88         
##  Min.   :-324.38   Min.   :-362.3839   Min.   :-421.502   Min.   :-407.13  
##  1st Qu.:  11.53   1st Qu.:   0.2786   1st Qu.:  -1.011   1st Qu.: -17.87  
##  Median :  70.06   Median :  70.4060   Median :  70.615   Median :  72.31  
##  Mean   :  70.19   Mean   :  71.3721   Mean   :  70.303   Mean   :  71.74  
##  3rd Qu.: 128.54   3rd Qu.: 142.5845   3rd Qu.: 141.637   3rd Qu.: 160.84  
##  Max.   : 561.79   Max.   : 550.9005   Max.   : 502.822   Max.   : 612.57  
##        89                 90                91                92          
##  Min.   :-391.733   Min.   :-387.84   Min.   :-412.66   Min.   :-443.856  
##  1st Qu.:   4.008   1st Qu.:   8.08   1st Qu.:  13.03   1st Qu.:   6.534  
##  Median :  71.379   Median :  71.13   Median :  70.98   Median :  72.065  
##  Mean   :  71.212   Mean   :  71.38   Mean   :  71.50   Mean   :  71.199  
##  3rd Qu.: 139.364   3rd Qu.: 134.61   3rd Qu.: 130.17   3rd Qu.: 135.859  
##  Max.   : 502.996   Max.   : 506.07   Max.   : 445.96   Max.   : 579.682  
##        93                94                95                96         
##  Min.   :-306.31   Min.   :-333.15   Min.   :-504.44   Min.   :-339.97  
##  1st Qu.:  12.84   1st Qu.:  10.87   1st Qu.:  10.38   1st Qu.:  11.55  
##  Median :  70.60   Median :  70.55   Median :  71.49   Median :  70.33  
##  Mean   :  70.57   Mean   :  70.07   Mean   :  70.56   Mean   :  70.19  
##  3rd Qu.: 128.17   3rd Qu.: 129.90   3rd Qu.: 131.14   3rd Qu.: 129.02  
##  Max.   : 460.96   Max.   : 517.34   Max.   : 537.53   Max.   : 493.69  
##        97                98                 99                100         
##  Min.   :-380.50   Min.   :-459.756   Min.   :-385.889   Min.   :-362.05  
##  1st Qu.:  10.89   1st Qu.:   8.296   1st Qu.:   7.675   1st Qu.:  10.54  
##  Median :  71.30   Median :  70.806   Median :  72.657   Median :  71.56  
##  Mean   :  71.48   Mean   :  70.423   Mean   :  72.213   Mean   :  71.16  
##  3rd Qu.: 132.44   3rd Qu.: 132.775   3rd Qu.: 136.832   3rd Qu.: 131.60  
##  Max.   : 734.19   Max.   : 596.176   Max.   : 514.875   Max.   : 480.36  
##       101                 102               103               104         
##  Min.   :-398.3221   Min.   :-419.59   Min.   :-314.50   Min.   :-490.85  
##  1st Qu.:  -0.5229   1st Qu.:  11.49   1st Qu.:  10.94   1st Qu.: -12.95  
##  Median :  70.8834   Median :  69.72   Median :  70.58   Median :  71.28  
##  Mean   :  70.6739   Mean   :  70.66   Mean   :  70.97   Mean   :  70.59  
##  3rd Qu.: 140.2295   3rd Qu.: 129.61   3rd Qu.: 131.44   3rd Qu.: 153.43  
##  Max.   : 621.4519   Max.   : 469.71   Max.   : 652.23   Max.   : 545.65  
##       105               106               107               108          
##  Min.   :-344.75   Min.   :-309.80   Min.   :-504.10   Min.   :-404.714  
##  1st Qu.:  13.59   1st Qu.:  13.19   1st Qu.:  10.03   1st Qu.:   2.237  
##  Median :  71.38   Median :  70.41   Median :  70.43   Median :  72.226  
##  Mean   :  71.12   Mean   :  70.89   Mean   :  70.21   Mean   :  71.452  
##  3rd Qu.: 128.98   3rd Qu.: 129.21   3rd Qu.: 130.32   3rd Qu.: 140.289  
##  Max.   : 653.50   Max.   : 665.86   Max.   : 442.24   Max.   : 487.390  
##       109                110                 111               112         
##  Min.   :-348.937   Min.   :-375.9020   Min.   :-439.46   Min.   :-370.93  
##  1st Qu.:   7.994   1st Qu.:   0.1404   1st Qu.: -11.59   1st Qu.:   4.49  
##  Median :  69.874   Median :  70.7958   Median :  72.25   Median :  70.83  
##  Mean   :  70.468   Mean   :  70.5872   Mean   :  71.72   Mean   :  70.43  
##  3rd Qu.: 132.403   3rd Qu.: 140.3446   3rd Qu.: 154.36   3rd Qu.: 135.74  
##  Max.   : 550.462   Max.   : 510.4342   Max.   : 504.65   Max.   : 477.66  
##       113               114                 115                116          
##  Min.   :-444.74   Min.   :-327.4352   Min.   :-306.692   Min.   :-354.461  
##  1st Qu.:   6.22   1st Qu.:   0.0492   1st Qu.:   6.247   1st Qu.:   9.384  
##  Median :  70.80   Median :  70.5386   Median :  70.347   Median :  71.914  
##  Mean   :  71.97   Mean   :  70.8711   Mean   :  70.839   Mean   :  71.736  
##  3rd Qu.: 137.84   3rd Qu.: 141.2629   3rd Qu.: 135.293   3rd Qu.: 133.621  
##  Max.   : 471.19   Max.   : 546.2123   Max.   : 607.728   Max.   : 446.065  
##       117               118               119                120         
##  Min.   :-336.72   Min.   :-595.29   Min.   :-749.534   Min.   :-348.44  
##  1st Qu.:  12.48   1st Qu.: -30.37   1st Qu.:  -4.022   1st Qu.:   1.84  
##  Median :  70.04   Median :  72.05   Median :  71.810   Median :  69.75  
##  Mean   :  70.50   Mean   :  71.66   Mean   :  71.912   Mean   :  69.86  
##  3rd Qu.: 128.20   3rd Qu.: 174.05   3rd Qu.: 147.504   3rd Qu.: 138.24  
##  Max.   : 503.74   Max.   : 715.62   Max.   : 516.583   Max.   : 520.82  
##       121                 122                123               124           
##  Min.   :-356.3366   Min.   :-443.421   Min.   :-339.27   Min.   :-539.9084  
##  1st Qu.:  -0.1928   1st Qu.:   3.447   1st Qu.:  12.54   1st Qu.:  -0.2374  
##  Median :  71.5712   Median :  70.875   Median :  71.37   Median :  71.8033  
##  Mean   :  70.5604   Mean   :  70.590   Mean   :  71.57   Mean   :  72.2391  
##  3rd Qu.: 141.6608   3rd Qu.: 137.633   3rd Qu.: 130.76   3rd Qu.: 145.6020  
##  Max.   : 513.1220   Max.   : 456.201   Max.   : 546.11   Max.   : 643.7576  
##       125               126                127                128         
##  Min.   :-377.87   Min.   :-306.472   Min.   :-366.364   Min.   :-426.00  
##  1st Qu.:  18.14   1st Qu.:   6.678   1st Qu.:   7.813   1st Qu.:  10.93  
##  Median :  71.53   Median :  72.207   Median :  71.631   Median :  71.47  
##  Mean   :  70.98   Mean   :  71.828   Mean   :  70.867   Mean   :  71.30  
##  3rd Qu.: 124.34   3rd Qu.: 135.634   3rd Qu.: 133.269   3rd Qu.: 131.86  
##  Max.   : 439.19   Max.   : 483.493   Max.   : 773.975   Max.   : 472.70  
##       129               130                131                132          
##  Min.   :-405.28   Min.   :-410.809   Min.   :-447.548   Min.   :-525.053  
##  1st Qu.:  13.90   1st Qu.:  -2.151   1st Qu.:   4.487   1st Qu.:   8.763  
##  Median :  71.06   Median :  70.377   Median :  70.917   Median :  71.412  
##  Mean   :  71.01   Mean   :  69.869   Mean   :  70.924   Mean   :  70.406  
##  3rd Qu.: 128.10   3rd Qu.: 141.523   3rd Qu.: 137.719   3rd Qu.: 132.169  
##  Max.   : 416.26   Max.   : 652.394   Max.   : 496.210   Max.   : 887.240  
##       133                134               135               136          
##  Min.   :-356.216   Min.   :-337.16   Min.   :-269.03   Min.   :-528.112  
##  1st Qu.:   9.713   1st Qu.:  12.45   1st Qu.:  13.30   1st Qu.:   7.375  
##  Median :  71.034   Median :  70.62   Median :  70.74   Median :  71.764  
##  Mean   :  71.163   Mean   :  70.59   Mean   :  71.13   Mean   :  71.257  
##  3rd Qu.: 132.347   3rd Qu.: 129.70   3rd Qu.: 129.23   3rd Qu.: 135.735  
##  Max.   : 434.524   Max.   : 515.72   Max.   : 483.80   Max.   : 434.346  
##       137                138                139                 140         
##  Min.   :-374.843   Min.   :-381.624   Min.   :-407.5262   Min.   :-372.51  
##  1st Qu.:   8.007   1st Qu.:   5.852   1st Qu.:   0.1344   1st Qu.:  11.74  
##  Median :  71.863   Median :  69.377   Median :  71.2656   Median :  70.44  
##  Mean   :  71.876   Mean   :  69.926   Mean   :  71.0722   Mean   :  70.58  
##  3rd Qu.: 136.482   3rd Qu.: 135.434   3rd Qu.: 142.4424   3rd Qu.: 129.09  
##  Max.   : 555.726   Max.   : 487.293   Max.   : 730.2026   Max.   : 613.72  
##       141                142               143               144          
##  Min.   :-440.513   Min.   :-340.62   Min.   :-499.63   Min.   :-317.425  
##  1st Qu.:   8.214   1st Qu.:  14.34   1st Qu.:  16.99   1st Qu.:   7.079  
##  Median :  71.770   Median :  71.14   Median :  70.81   Median :  70.854  
##  Mean   :  71.002   Mean   :  71.23   Mean   :  71.03   Mean   :  70.983  
##  3rd Qu.: 133.896   3rd Qu.: 127.61   3rd Qu.: 125.83   3rd Qu.: 135.405  
##  Max.   : 595.338   Max.   : 422.61   Max.   : 428.62   Max.   : 530.456  
##       145                146                147               148          
##  Min.   :-490.759   Min.   :-368.962   Min.   :-345.52   Min.   :-353.207  
##  1st Qu.:  -9.257   1st Qu.:   8.781   1st Qu.:  12.77   1st Qu.:   0.845  
##  Median :  72.037   Median :  70.875   Median :  70.75   Median :  69.230  
##  Mean   :  72.178   Mean   :  70.364   Mean   :  70.82   Mean   :  69.724  
##  3rd Qu.: 152.288   3rd Qu.: 132.111   3rd Qu.: 129.80   3rd Qu.: 139.332  
##  Max.   : 540.729   Max.   : 441.857   Max.   : 471.49   Max.   : 594.592  
##       149               150                151                152         
##  Min.   :-369.19   Min.   :-383.887   Min.   :-334.753   Min.   :-440.09  
##  1st Qu.:   5.41   1st Qu.:   5.059   1st Qu.:   8.929   1st Qu.:  10.88  
##  Median :  71.07   Median :  70.611   Median :  70.477   Median :  71.29  
##  Mean   :  70.86   Mean   :  70.405   Mean   :  70.591   Mean   :  71.00  
##  3rd Qu.: 136.03   3rd Qu.: 135.286   3rd Qu.: 131.629   3rd Qu.: 130.52  
##  Max.   : 522.09   Max.   : 466.648   Max.   : 600.061   Max.   : 477.12  
##       153               154                155               156          
##  Min.   :-310.78   Min.   :-622.179   Min.   :-352.68   Min.   :-528.009  
##  1st Qu.:  10.02   1st Qu.:   4.437   1st Qu.:  14.26   1st Qu.:   2.712  
##  Median :  71.32   Median :  70.654   Median :  71.07   Median :  70.673  
##  Mean   :  70.89   Mean   :  71.201   Mean   :  71.22   Mean   :  70.373  
##  3rd Qu.: 132.26   3rd Qu.: 138.630   3rd Qu.: 128.47   3rd Qu.: 138.312  
##  Max.   : 544.63   Max.   : 535.712   Max.   : 452.50   Max.   : 492.153  
##       157               158                159                160         
##  Min.   :-476.60   Min.   :-362.375   Min.   :-409.067   Min.   :-440.09  
##  1st Qu.: -14.53   1st Qu.:   8.137   1st Qu.:   2.574   1st Qu.: -15.90  
##  Median :  71.48   Median :  71.098   Median :  71.297   Median :  71.27  
##  Mean   :  70.70   Mean   :  71.126   Mean   :  70.835   Mean   :  71.03  
##  3rd Qu.: 156.64   3rd Qu.: 133.640   3rd Qu.: 140.565   3rd Qu.: 157.85  
##  Max.   : 654.91   Max.   : 434.286   Max.   : 466.031   Max.   : 692.15  
##       161               162               163               164          
##  Min.   :-418.75   Min.   :-591.47   Min.   :-391.97   Min.   :-453.848  
##  1st Qu.: -10.66   1st Qu.:   9.99   1st Qu.:  14.23   1st Qu.:   4.645  
##  Median :  70.21   Median :  70.69   Median :  70.73   Median :  70.437  
##  Mean   :  70.52   Mean   :  71.58   Mean   :  71.14   Mean   :  70.295  
##  3rd Qu.: 152.93   3rd Qu.: 133.49   3rd Qu.: 128.35   3rd Qu.: 135.923  
##  Max.   : 585.88   Max.   : 490.65   Max.   : 564.85   Max.   : 528.592  
##       165                166                167                168          
##  Min.   :-326.618   Min.   :-440.536   Min.   :-580.106   Min.   :-409.545  
##  1st Qu.:   6.291   1st Qu.:  -6.341   1st Qu.:   8.439   1st Qu.:   4.351  
##  Median :  69.864   Median :  71.934   Median :  70.540   Median :  69.810  
##  Mean   :  70.480   Mean   :  72.102   Mean   :  70.612   Mean   :  70.366  
##  3rd Qu.: 133.691   3rd Qu.: 150.481   3rd Qu.: 133.690   3rd Qu.: 135.866  
##  Max.   : 539.879   Max.   : 535.096   Max.   : 503.863   Max.   : 598.818  
##       169                170                171               172           
##  Min.   :-287.425   Min.   :-462.577   Min.   :-388.90   Min.   :-378.3443  
##  1st Qu.:   9.419   1st Qu.:  -5.674   1st Qu.: -10.71   1st Qu.:  -0.7986  
##  Median :  72.084   Median :  71.299   Median :  71.32   Median :  71.3646  
##  Mean   :  71.174   Mean   :  70.888   Mean   :  71.20   Mean   :  71.2807  
##  3rd Qu.: 131.908   3rd Qu.: 147.008   3rd Qu.: 152.54   3rd Qu.: 143.6043  
##  Max.   : 637.971   Max.   : 572.258   Max.   : 606.39   Max.   : 497.3140  
##       173                174                175                176         
##  Min.   :-639.527   Min.   :-388.873   Min.   :-452.220   Min.   :-396.80  
##  1st Qu.:   2.104   1st Qu.:   8.618   1st Qu.:  -7.915   1st Qu.: -11.28  
##  Median :  71.038   Median :  71.240   Median :  70.309   Median :  70.00  
##  Mean   :  70.525   Mean   :  71.196   Mean   :  70.430   Mean   :  70.42  
##  3rd Qu.: 140.635   3rd Qu.: 133.908   3rd Qu.: 148.218   3rd Qu.: 152.42  
##  Max.   : 500.281   Max.   : 436.545   Max.   : 599.007   Max.   : 561.51  
##       177               178                179                180         
##  Min.   :-459.70   Min.   :-421.020   Min.   :-422.059   Min.   :-382.77  
##  1st Qu.: -11.90   1st Qu.:   1.812   1st Qu.:  -3.168   1st Qu.:  13.81  
##  Median :  72.36   Median :  71.789   Median :  69.887   Median :  70.67  
##  Mean   :  70.89   Mean   :  71.333   Mean   :  69.974   Mean   :  71.26  
##  3rd Qu.: 153.48   3rd Qu.: 139.269   3rd Qu.: 144.414   3rd Qu.: 128.31  
##  Max.   : 568.20   Max.   : 522.892   Max.   : 674.612   Max.   : 580.61  
##       181                182                183                184          
##  Min.   :-387.553   Min.   :-479.822   Min.   :-348.754   Min.   :-369.070  
##  1st Qu.:   1.042   1st Qu.:  -9.248   1st Qu.:   1.906   1st Qu.:   3.761  
##  Median :  69.370   Median :  71.791   Median :  69.888   Median :  70.532  
##  Mean   :  69.878   Mean   :  71.676   Mean   :  70.416   Mean   :  70.628  
##  3rd Qu.: 138.233   3rd Qu.: 152.065   3rd Qu.: 138.577   3rd Qu.: 137.312  
##  Max.   : 476.211   Max.   : 550.406   Max.   : 512.230   Max.   : 528.362  
##       185                186                187               188         
##  Min.   :-357.042   Min.   :-450.936   Min.   :-441.62   Min.   :-390.45  
##  1st Qu.:  -4.231   1st Qu.:   9.477   1st Qu.:  12.57   1st Qu.: -10.75  
##  Median :  71.216   Median :  71.165   Median :  70.39   Median :  69.51  
##  Mean   :  70.567   Mean   :  71.091   Mean   :  70.53   Mean   :  69.50  
##  3rd Qu.: 145.872   3rd Qu.: 132.488   3rd Qu.: 128.69   3rd Qu.: 149.93  
##  Max.   : 533.840   Max.   : 501.872   Max.   : 504.86   Max.   : 569.99  
##       189                190                 191               192           
##  Min.   :-431.133   Min.   :-430.2150   Min.   :-337.26   Min.   :-338.9254  
##  1st Qu.:  -3.212   1st Qu.:   0.9978   1st Qu.:  15.35   1st Qu.:   0.6339  
##  Median :  72.297   Median :  71.7018   Median :  71.02   Median :  70.7497  
##  Mean   :  71.826   Mean   :  71.0475   Mean   :  71.03   Mean   :  70.8856  
##  3rd Qu.: 147.013   3rd Qu.: 141.6495   3rd Qu.: 125.94   3rd Qu.: 140.7080  
##  Max.   : 645.826   Max.   : 485.6389   Max.   : 512.04   Max.   : 477.4773  
##       193                194                195                196         
##  Min.   :-380.459   Min.   :-542.354   Min.   :-430.491   Min.   :-380.21  
##  1st Qu.:   5.539   1st Qu.:   2.629   1st Qu.:   0.986   1st Qu.:  12.45  
##  Median :  70.875   Median :  69.839   Median :  71.057   Median :  71.89  
##  Mean   :  70.774   Mean   :  70.341   Mean   :  70.131   Mean   :  71.81  
##  3rd Qu.: 135.294   3rd Qu.: 137.700   3rd Qu.: 139.730   3rd Qu.: 131.73  
##  Max.   : 494.373   Max.   : 485.507   Max.   : 606.260   Max.   : 683.40  
##       197                198               199                200         
##  Min.   :-398.364   Min.   :-489.07   Min.   :-415.836   Min.   :-415.38  
##  1st Qu.:   6.987   1st Qu.:  10.31   1st Qu.:  -2.763   1st Qu.:  -9.37  
##  Median :  71.745   Median :  70.77   Median :  71.198   Median :  70.98  
##  Mean   :  70.722   Mean   :  70.60   Mean   :  71.530   Mean   :  70.90  
##  3rd Qu.: 134.627   3rd Qu.: 132.13   3rd Qu.: 146.155   3rd Qu.: 151.15  
##  Max.   : 454.461   Max.   : 506.69   Max.   : 550.216   Max.   : 610.31  
##       201               202                203                204         
##  Min.   :-345.56   Min.   :-311.910   Min.   :-349.414   Min.   :-380.86  
##  1st Qu.:  10.30   1st Qu.:   7.866   1st Qu.:   8.586   1st Qu.:  11.16  
##  Median :  70.81   Median :  71.096   Median :  70.635   Median :  71.34  
##  Mean   :  70.50   Mean   :  70.674   Mean   :  70.496   Mean   :  71.12  
##  3rd Qu.: 130.95   3rd Qu.: 134.602   3rd Qu.: 131.363   3rd Qu.: 131.23  
##  Max.   : 521.98   Max.   : 504.878   Max.   : 462.179   Max.   : 516.54  
##       205                206                207                208         
##  Min.   :-312.517   Min.   :-420.042   Min.   :-336.308   Min.   :-333.82  
##  1st Qu.:   8.984   1st Qu.:  -8.871   1st Qu.:   4.162   1st Qu.:  12.41  
##  Median :  71.078   Median :  69.415   Median :  71.359   Median :  70.19  
##  Mean   :  71.254   Mean   :  70.771   Mean   :  70.674   Mean   :  70.53  
##  3rd Qu.: 133.021   3rd Qu.: 149.863   3rd Qu.: 137.851   3rd Qu.: 128.67  
##  Max.   : 586.143   Max.   : 731.592   Max.   : 762.450   Max.   : 502.97  
##       209               210                 211                212          
##  Min.   :-360.04   Min.   :-377.0112   Min.   :-417.937   Min.   :-353.634  
##  1st Qu.:  10.57   1st Qu.:  -0.1787   1st Qu.:  -3.442   1st Qu.:   2.771  
##  Median :  71.31   Median :  72.2645   Median :  71.552   Median :  69.647  
##  Mean   :  71.18   Mean   :  71.6419   Mean   :  71.970   Mean   :  70.133  
##  3rd Qu.: 131.84   3rd Qu.: 142.1273   3rd Qu.: 148.184   3rd Qu.: 137.580  
##  Max.   : 566.88   Max.   : 508.0355   Max.   : 625.948   Max.   : 514.382  
##       213                214                 215                216         
##  Min.   :-318.183   Min.   :-438.9347   Min.   :-514.221   Min.   :-349.46  
##  1st Qu.:   8.408   1st Qu.:   0.2306   1st Qu.:  -3.618   1st Qu.:   5.60  
##  Median :  70.412   Median :  71.5225   Median :  70.246   Median :  71.89  
##  Mean   :  70.665   Mean   :  70.2331   Mean   :  70.693   Mean   :  71.62  
##  3rd Qu.: 132.919   3rd Qu.: 141.6554   3rd Qu.: 145.147   3rd Qu.: 137.16  
##  Max.   : 545.482   Max.   : 502.0180   Max.   : 563.220   Max.   : 682.42  
##       217               218               219               220          
##  Min.   :-441.01   Min.   :-343.90   Min.   :-320.14   Min.   :-390.376  
##  1st Qu.:  11.79   1st Qu.:   5.59   1st Qu.:  14.90   1st Qu.:   8.791  
##  Median :  71.66   Median :  70.84   Median :  70.36   Median :  71.070  
##  Mean   :  71.46   Mean   :  70.21   Mean   :  70.40   Mean   :  70.862  
##  3rd Qu.: 130.61   3rd Qu.: 132.98   3rd Qu.: 125.62   3rd Qu.: 132.334  
##  Max.   : 526.22   Max.   : 572.74   Max.   : 529.50   Max.   : 587.554  
##       221                222               223                224         
##  Min.   :-295.135   Min.   :-429.82   Min.   :-425.352   Min.   :-461.33  
##  1st Qu.:   6.198   1st Qu.:   7.25   1st Qu.:   9.384   1st Qu.: -13.53  
##  Median :  70.171   Median :  71.11   Median :  70.837   Median :  70.91  
##  Mean   :  70.188   Mean   :  70.92   Mean   :  71.104   Mean   :  70.42  
##  3rd Qu.: 132.930   3rd Qu.: 134.12   3rd Qu.: 131.979   3rd Qu.: 153.79  
##  Max.   : 525.361   Max.   : 490.07   Max.   : 659.425   Max.   : 654.88  
##  (per-column summaries for predictive draws 225-270 omitted: medians and
##   means all fall between roughly 69 and 72, with minima, quartiles, and
##   maxima comparable to the columns shown above)
pp_check(model_1)

The pp_check() function plots the observed outcome variable y against the simulated draws y_rep. In the above graph the simulated values are spread widely across the axis, while the observed values are concentrated around 190.

tidy(model_1,conf.int = TRUE,conf.level=0.9999)
  1. On the basis of the results, identify a subset of predictors that can help you produce a more concise summary of the data. Then, fit a model with this reduced set of predictors by repeating the steps 1 - 3 above with this reduced subset of predictors.

    I used the tidy() function to identify a subset of predictors that are statistically significant for Max.HR. As the table suggests, most of the coefficient estimates are close to zero, but a few predictors stand out: FBS.over.120, Sex, Chest.pain.type, ST.depression, and Number.of.vessels.fluro appear to be significant predictor variables.

dataset%>%
  select(FBS.over.120,Chest.pain.type,Sex,Max.HR)%>%
  ggpairs()

ggpairs() shows the pairwise correlations among the selected variables. In the above graph Max.HR is negatively correlated with most of the predictor variables, the exception being the blood sugar indicator FBS.over.120.
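The same pairwise correlations can also be checked numerically; a minimal sketch, assuming `dataset` has the column names used above:

```r
# Pearson correlation of Max.HR with each selected predictor
# (column names assumed to match the dataset used above)
vars <- c("FBS.over.120", "Chest.pain.type", "Sex")
sapply(dataset[, vars], function(x) cor(dataset$Max.HR, x))
```

This reproduces the correlation values printed in the upper panels of the ggpairs() plot, which is useful when the plot labels are hard to read.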

Model-2(Reduced model-1)

I am using Age, BP, FBS.over.120, Chest.pain.type, and Sex as predictors, treating the predictors as not statistically dependent on each other. I set the heart-rate range as the prior for the intercept and a weakly informative normal prior for the coefficients, with autoscale enabled.

model_2 <- stan_glm(
  Max.HR~Age+BP+FBS.over.120+Chest.pain.type+Sex, 
  data = dataset1, family = gaussian,
  prior_intercept = normal(71,65.5),
  prior = normal(0, 1, autoscale = TRUE), 
  prior_aux = exponential(1, autoscale = TRUE),
  chains = 4, iter = 5000*2, seed = 84735,prior_PD = TRUE)
## 
## SAMPLING FOR MODEL 'continuous' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 1.8e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.18 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:    1 / 10000 [  0%]  (Warmup)
## Chain 1: Iteration: 1000 / 10000 [ 10%]  (Warmup)
## Chain 1: Iteration: 2000 / 10000 [ 20%]  (Warmup)
## Chain 1: Iteration: 3000 / 10000 [ 30%]  (Warmup)
## Chain 1: Iteration: 4000 / 10000 [ 40%]  (Warmup)
## Chain 1: Iteration: 5000 / 10000 [ 50%]  (Warmup)
## Chain 1: Iteration: 5001 / 10000 [ 50%]  (Sampling)
## Chain 1: Iteration: 6000 / 10000 [ 60%]  (Sampling)
## Chain 1: Iteration: 7000 / 10000 [ 70%]  (Sampling)
## Chain 1: Iteration: 8000 / 10000 [ 80%]  (Sampling)
## Chain 1: Iteration: 9000 / 10000 [ 90%]  (Sampling)
## Chain 1: Iteration: 10000 / 10000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.121958 seconds (Warm-up)
## Chain 1:                0.19579 seconds (Sampling)
## Chain 1:                0.317748 seconds (Total)
## Chain 1: 
## 
## SAMPLING FOR MODEL 'continuous' NOW (CHAIN 2).
## Chain 2: 
## Chain 2: Gradient evaluation took 1.3e-05 seconds
## Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.13 seconds.
## Chain 2: Adjust your expectations accordingly!
## Chain 2: 
## Chain 2: 
## Chain 2: Iteration:    1 / 10000 [  0%]  (Warmup)
## Chain 2: Iteration: 1000 / 10000 [ 10%]  (Warmup)
## Chain 2: Iteration: 2000 / 10000 [ 20%]  (Warmup)
## Chain 2: Iteration: 3000 / 10000 [ 30%]  (Warmup)
## Chain 2: Iteration: 4000 / 10000 [ 40%]  (Warmup)
## Chain 2: Iteration: 5000 / 10000 [ 50%]  (Warmup)
## Chain 2: Iteration: 5001 / 10000 [ 50%]  (Sampling)
## Chain 2: Iteration: 6000 / 10000 [ 60%]  (Sampling)
## Chain 2: Iteration: 7000 / 10000 [ 70%]  (Sampling)
## Chain 2: Iteration: 8000 / 10000 [ 80%]  (Sampling)
## Chain 2: Iteration: 9000 / 10000 [ 90%]  (Sampling)
## Chain 2: Iteration: 10000 / 10000 [100%]  (Sampling)
## Chain 2: 
## Chain 2:  Elapsed Time: 0.116275 seconds (Warm-up)
## Chain 2:                0.201905 seconds (Sampling)
## Chain 2:                0.31818 seconds (Total)
## Chain 2: 
## 
## SAMPLING FOR MODEL 'continuous' NOW (CHAIN 3).
## Chain 3: 
## Chain 3: Gradient evaluation took 1.3e-05 seconds
## Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.13 seconds.
## Chain 3: Adjust your expectations accordingly!
## Chain 3: 
## Chain 3: 
## Chain 3: Iteration:    1 / 10000 [  0%]  (Warmup)
## Chain 3: Iteration: 1000 / 10000 [ 10%]  (Warmup)
## Chain 3: Iteration: 2000 / 10000 [ 20%]  (Warmup)
## Chain 3: Iteration: 3000 / 10000 [ 30%]  (Warmup)
## Chain 3: Iteration: 4000 / 10000 [ 40%]  (Warmup)
## Chain 3: Iteration: 5000 / 10000 [ 50%]  (Warmup)
## Chain 3: Iteration: 5001 / 10000 [ 50%]  (Sampling)
## Chain 3: Iteration: 6000 / 10000 [ 60%]  (Sampling)
## Chain 3: Iteration: 7000 / 10000 [ 70%]  (Sampling)
## Chain 3: Iteration: 8000 / 10000 [ 80%]  (Sampling)
## Chain 3: Iteration: 9000 / 10000 [ 90%]  (Sampling)
## Chain 3: Iteration: 10000 / 10000 [100%]  (Sampling)
## Chain 3: 
## Chain 3:  Elapsed Time: 0.112571 seconds (Warm-up)
## Chain 3:                0.110856 seconds (Sampling)
## Chain 3:                0.223427 seconds (Total)
## Chain 3: 
## 
## SAMPLING FOR MODEL 'continuous' NOW (CHAIN 4).
## Chain 4: 
## Chain 4: Gradient evaluation took 7e-06 seconds
## Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.07 seconds.
## Chain 4: Adjust your expectations accordingly!
## Chain 4: 
## Chain 4: 
## Chain 4: Iteration:    1 / 10000 [  0%]  (Warmup)
## Chain 4: Iteration: 1000 / 10000 [ 10%]  (Warmup)
## Chain 4: Iteration: 2000 / 10000 [ 20%]  (Warmup)
## Chain 4: Iteration: 3000 / 10000 [ 30%]  (Warmup)
## Chain 4: Iteration: 4000 / 10000 [ 40%]  (Warmup)
## Chain 4: Iteration: 5000 / 10000 [ 50%]  (Warmup)
## Chain 4: Iteration: 5001 / 10000 [ 50%]  (Sampling)
## Chain 4: Iteration: 6000 / 10000 [ 60%]  (Sampling)
## Chain 4: Iteration: 7000 / 10000 [ 70%]  (Sampling)
## Chain 4: Iteration: 8000 / 10000 [ 80%]  (Sampling)
## Chain 4: Iteration: 9000 / 10000 [ 90%]  (Sampling)
## Chain 4: Iteration: 10000 / 10000 [100%]  (Sampling)
## Chain 4: 
## Chain 4:  Elapsed Time: 0.115927 seconds (Warm-up)
## Chain 4:                0.13055 seconds (Sampling)
## Chain 4:                0.246477 seconds (Total)
## Chain 4:
summary(model_2)
## 
## Model Info:
##  function:     stan_glm
##  family:       gaussian [identity]
##  formula:      Max.HR ~ Age + BP + FBS.over.120 + Chest.pain.type + Sex
##  algorithm:    sampling
##  sample:       20000 (posterior sample size)
##  priors:       see help('prior_summary')
##  observations: 270
##  predictors:   6
## 
## Estimates:
##                   mean   sd     10%    50%    90% 
## (Intercept)       73.2  244.6 -239.4   74.1  383.7
## Age                0.0    2.6   -3.3    0.0    3.3
## BP                 0.0    1.3   -1.7    0.0    1.6
## FBS.over.120      -0.4   65.1  -83.9   -0.4   83.7
## Chest.pain.type    0.2   24.5  -31.3    0.3   31.4
## Sex               -0.3   49.3  -63.5   -0.1   63.0
## sigma             23.1   23.2    2.4   16.0   54.1
## 
## MCMC diagnostics
##                 mcse Rhat n_eff
## (Intercept)     1.5  1.0  27451
## Age             0.0  1.0  26700
## BP              0.0  1.0  26469
## FBS.over.120    0.4  1.0  28593
## Chest.pain.type 0.1  1.0  27077
## Sex             0.3  1.0  28108
## sigma           0.1  1.0  28130
## log-posterior   0.0  1.0   8083
## 
## For each parameter, mcse is Monte Carlo standard error, n_eff is a crude measure of effective sample size, and Rhat is the potential scale reduction factor on split chains (at convergence Rhat=1).

The summary shows that the intercept has a posterior mean of 73.2 with a standard deviation of 244.6 and a 90th percentile of 383.7. The Rhat value is 1 for all parameters. Note that the mean estimates for FBS.over.120 and Sex are slightly negative.

mcmc_trace(model_2, size = 0.1)

mcmc_dens_overlay(model_2)

mcmc_acf(model_2)

neff_ratio(model_2)
##     (Intercept)             Age              BP    FBS.over.120 Chest.pain.type 
##         1.37255         1.33500         1.32345         1.42965         1.35385 
##             Sex           sigma 
##         1.40540         1.40650
rhat(model_2)
##     (Intercept)             Age              BP    FBS.over.120 Chest.pain.type 
##       0.9999251       0.9999163       0.9999555       0.9998619       0.9999468 
##             Sex           sigma 
##       0.9998806       0.9998437

The mcmc_trace() function plots the sampled chains. The chains produced for the above model appear to be well mixed, so they are considered stable.

The mcmc_dens_overlay() function overlays the density of each parameter for each chain. In the above graph the densities appear to overlap well with each other.

prior_summary(model_2)
## Priors for model 'model_2' 
## ------
## Intercept (after predictors centered)
##  ~ normal(location = 71, scale = 66)
## 
## Coefficients
##   Specified prior:
##     ~ normal(location = [0,0,0,...], scale = [1,1,1,...])
##   Adjusted prior:
##     ~ normal(location = [0,0,0,...], scale = [ 2.54, 1.30,65.09,...])
## 
## Auxiliary (sigma)
##   Specified prior:
##     ~ exponential(rate = 1)
##   Adjusted prior:
##     ~ exponential(rate = 0.043)
## ------
## See help('prior_summary.stanreg') for more details

The values we assumed are close to the values reported by prior_summary(), so the priors we specified are reasonable.
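The adjusted coefficient scales in the prior summary can be reproduced by hand; a minimal sketch, assuming rstanarm's documented autoscaling rule for gaussian models (adjusted scale = scale × sd(y)/sd(x)) and the column names used above:

```r
# rstanarm autoscaling (gaussian family): adjusted scale = scale * sd(y) / sd(x)
# Column names are assumed to match dataset1 used above.
sd_y <- sd(dataset1$Max.HR)
sd_y / sd(dataset1$Age)           # should match the 2.54 scale shown for Age
sd_y / sd(dataset1$FBS.over.120)  # should match the 65.09 scale for FBS.over.120
1 / sd_y                          # should match the 0.043 exponential rate for sigma
```

The binary predictor FBS.over.120 has a small standard deviation, which is why its adjusted prior scale (65.09) is so much larger than that of the continuous predictors.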

tidy(model_2,conf.int = TRUE,conf.level=0.9999)

As the table suggests, most of the coefficient estimates are close to zero. FBS.over.120, Sex, and Chest.pain.type appear to be significant predictors, while the estimates for Age and BP are approximately zero, so I am dropping those two for the next model. Since there is no strong relationship between BP and Max.HR, BP cannot be claimed as a useful predictor.

pp_check(model_2)

The pp_check() function plots the observed outcome variable y against the simulated draws y_rep. In the above graph the simulated values are spread widely while the observed values are concentrated around 190; the simulated values are also more heavily skewed than the observed outcome.

Model-3(Reduced model-2)

I am using FBS.over.120 and Chest.pain.type as predictors, treating them as not statistically dependent on each other. As before, I set the heart-rate range as the prior for the intercept and a weakly informative normal prior for the coefficients, with autoscale enabled.

dataset6=dataset%>%select(FBS.over.120,Chest.pain.type,Max.HR)
ggpairs(dataset6)

The exploratory analysis is performed using the ggpairs() function, which shows that the correlation of Max.HR is negative with Chest.pain.type and positive with FBS.over.120.

model_3 <- stan_glm(
  Max.HR~FBS.over.120+Chest.pain.type, 
  data = dataset1, family = gaussian,
  prior_intercept = normal(71,65.5),
  prior = normal(0, 1, autoscale = TRUE), 
  prior_aux = exponential(1, autoscale = TRUE),
  chains = 4, iter = 5000*2, seed = 84735,prior_PD = TRUE)
## 
## SAMPLING FOR MODEL 'continuous' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 1.7e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.17 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:    1 / 10000 [  0%]  (Warmup)
## Chain 1: Iteration: 1000 / 10000 [ 10%]  (Warmup)
## Chain 1: Iteration: 2000 / 10000 [ 20%]  (Warmup)
## Chain 1: Iteration: 3000 / 10000 [ 30%]  (Warmup)
## Chain 1: Iteration: 4000 / 10000 [ 40%]  (Warmup)
## Chain 1: Iteration: 5000 / 10000 [ 50%]  (Warmup)
## Chain 1: Iteration: 5001 / 10000 [ 50%]  (Sampling)
## Chain 1: Iteration: 6000 / 10000 [ 60%]  (Sampling)
## Chain 1: Iteration: 7000 / 10000 [ 70%]  (Sampling)
## Chain 1: Iteration: 8000 / 10000 [ 80%]  (Sampling)
## Chain 1: Iteration: 9000 / 10000 [ 90%]  (Sampling)
## Chain 1: Iteration: 10000 / 10000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.103872 seconds (Warm-up)
## Chain 1:                0.100682 seconds (Sampling)
## Chain 1:                0.204554 seconds (Total)
## Chain 1: 
## 
## SAMPLING FOR MODEL 'continuous' NOW (CHAIN 2).
## Chain 2: 
## Chain 2: Gradient evaluation took 1e-05 seconds
## Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.1 seconds.
## Chain 2: Adjust your expectations accordingly!
## Chain 2: 
## Chain 2: 
## Chain 2: Iteration:    1 / 10000 [  0%]  (Warmup)
## Chain 2: Iteration: 1000 / 10000 [ 10%]  (Warmup)
## Chain 2: Iteration: 2000 / 10000 [ 20%]  (Warmup)
## Chain 2: Iteration: 3000 / 10000 [ 30%]  (Warmup)
## Chain 2: Iteration: 4000 / 10000 [ 40%]  (Warmup)
## Chain 2: Iteration: 5000 / 10000 [ 50%]  (Warmup)
## Chain 2: Iteration: 5001 / 10000 [ 50%]  (Sampling)
## Chain 2: Iteration: 6000 / 10000 [ 60%]  (Sampling)
## Chain 2: Iteration: 7000 / 10000 [ 70%]  (Sampling)
## Chain 2: Iteration: 8000 / 10000 [ 80%]  (Sampling)
## Chain 2: Iteration: 9000 / 10000 [ 90%]  (Sampling)
## Chain 2: Iteration: 10000 / 10000 [100%]  (Sampling)
## Chain 2: 
## Chain 2:  Elapsed Time: 0.108582 seconds (Warm-up)
## Chain 2:                0.142851 seconds (Sampling)
## Chain 2:                0.251433 seconds (Total)
## Chain 2: 
## 
## SAMPLING FOR MODEL 'continuous' NOW (CHAIN 3).
## Chain 3: 
## Chain 3: Gradient evaluation took 6e-06 seconds
## Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.06 seconds.
## Chain 3: Adjust your expectations accordingly!
## Chain 3: 
## Chain 3: 
## Chain 3: Iteration:    1 / 10000 [  0%]  (Warmup)
## Chain 3: Iteration: 1000 / 10000 [ 10%]  (Warmup)
## Chain 3: Iteration: 2000 / 10000 [ 20%]  (Warmup)
## Chain 3: Iteration: 3000 / 10000 [ 30%]  (Warmup)
## Chain 3: Iteration: 4000 / 10000 [ 40%]  (Warmup)
## Chain 3: Iteration: 5000 / 10000 [ 50%]  (Warmup)
## Chain 3: Iteration: 5001 / 10000 [ 50%]  (Sampling)
## Chain 3: Iteration: 6000 / 10000 [ 60%]  (Sampling)
## Chain 3: Iteration: 7000 / 10000 [ 70%]  (Sampling)
## Chain 3: Iteration: 8000 / 10000 [ 80%]  (Sampling)
## Chain 3: Iteration: 9000 / 10000 [ 90%]  (Sampling)
## Chain 3: Iteration: 10000 / 10000 [100%]  (Sampling)
## Chain 3: 
## Chain 3:  Elapsed Time: 0.110651 seconds (Warm-up)
## Chain 3:                0.103647 seconds (Sampling)
## Chain 3:                0.214298 seconds (Total)
## Chain 3: 
## 
## SAMPLING FOR MODEL 'continuous' NOW (CHAIN 4).
## Chain 4: 
## Chain 4: Gradient evaluation took 6e-06 seconds
## Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.06 seconds.
## Chain 4: Adjust your expectations accordingly!
## Chain 4: 
## Chain 4: 
## Chain 4: Iteration:    1 / 10000 [  0%]  (Warmup)
## Chain 4: Iteration: 1000 / 10000 [ 10%]  (Warmup)
## Chain 4: Iteration: 2000 / 10000 [ 20%]  (Warmup)
## Chain 4: Iteration: 3000 / 10000 [ 30%]  (Warmup)
## Chain 4: Iteration: 4000 / 10000 [ 40%]  (Warmup)
## Chain 4: Iteration: 5000 / 10000 [ 50%]  (Warmup)
## Chain 4: Iteration: 5001 / 10000 [ 50%]  (Sampling)
## Chain 4: Iteration: 6000 / 10000 [ 60%]  (Sampling)
## Chain 4: Iteration: 7000 / 10000 [ 70%]  (Sampling)
## Chain 4: Iteration: 8000 / 10000 [ 80%]  (Sampling)
## Chain 4: Iteration: 9000 / 10000 [ 90%]  (Sampling)
## Chain 4: Iteration: 10000 / 10000 [100%]  (Sampling)
## Chain 4: 
## Chain 4:  Elapsed Time: 0.110096 seconds (Warm-up)
## Chain 4:                0.147218 seconds (Sampling)
## Chain 4:                0.257314 seconds (Total)
## Chain 4:
summary(model_3)
## 
## Model Info:
##  function:     stan_glm
##  family:       gaussian [identity]
##  formula:      Max.HR ~ FBS.over.120 + Chest.pain.type
##  algorithm:    sampling
##  sample:       20000 (posterior sample size)
##  priors:       see help('prior_summary')
##  observations: 270
##  predictors:   3
## 
## Estimates:
##                   mean   sd    10%   50%   90%
## (Intercept)      71.7  102.3 -59.6  71.8 201.3
## FBS.over.120     -0.2   65.0 -84.2   0.5  83.1
## Chest.pain.type  -0.1   24.4 -31.2  -0.1  31.3
## sigma            23.0   22.9   2.4  15.8  52.9
## 
## MCMC diagnostics
##                 mcse Rhat n_eff
## (Intercept)     0.8  1.0  16901
## FBS.over.120    0.5  1.0  19042
## Chest.pain.type 0.2  1.0  15501
## sigma           0.2  1.0  21772
## log-posterior   0.0  1.0   8860
## 
## For each parameter, mcse is Monte Carlo standard error, n_eff is a crude measure of effective sample size, and Rhat is the potential scale reduction factor on split chains (at convergence Rhat=1).

The summary shows that the intercept has a posterior mean of 71.7 with a standard deviation of 102.3 and a 90th percentile of 201.3. The Rhat value is 1 for all parameters. Note that the mean estimates for both FBS.over.120 and Chest.pain.type are slightly negative.

mcmc_trace(model_3, size = 0.1)

mcmc_dens_overlay(model_3)

mcmc_acf(model_3)

neff_ratio(model_3)
##     (Intercept)    FBS.over.120 Chest.pain.type           sigma 
##         0.84505         0.95210         0.77505         1.08860
rhat(model_3)
##     (Intercept)    FBS.over.120 Chest.pain.type           sigma 
##       0.9999069       1.0001365       0.9998953       1.0000749

The mcmc_trace() function plots the sampled chains. The chains produced for the above model appear to be well mixed, so they are considered stable.

The mcmc_dens_overlay() function overlays the density of each parameter for each chain. In the above graph the densities appear to overlap well with each other.

pp_check(model_3)

The pp_check() function plots the observed outcome variable y against the simulated draws y_rep. In the above graph the simulated values are spread widely while the observed values are concentrated around 200.

prior_summary(model_3)
## Priors for model 'model_3' 
## ------
## Intercept (after predictors centered)
##  ~ normal(location = 71, scale = 66)
## 
## Coefficients
##   Specified prior:
##     ~ normal(location = [0,0], scale = [1,1])
##   Adjusted prior:
##     ~ normal(location = [0,0], scale = [65.09,24.38])
## 
## Auxiliary (sigma)
##   Specified prior:
##     ~ exponential(rate = 1)
##   Adjusted prior:
##     ~ exponential(rate = 0.043)
## ------
## See help('prior_summary.stanreg') for more details

The values we specified are close to the values reported by prior_summary().

tidy(model_3,conf.int = TRUE,conf.level=0.9999)

As the table suggests, the two predictors carried over from the second model show a significant relationship with Max.HR.

Model-4(Filtered model)

I am using Chest.pain.type, FBS.over.120, Slope.of.ST, and Number.of.vessels.fluro as predictors, treating them as not statistically dependent on each other. As before, I set the heart-rate range as the prior for the intercept and a weakly informative normal prior for the coefficients, with autoscale enabled.

dataset3=dataset%>%select(Chest.pain.type,FBS.over.120,Slope.of.ST,Number.of.vessels.fluro,Max.HR)
model_4 <- stan_glm(
  Max.HR~Chest.pain.type+FBS.over.120+Slope.of.ST+Number.of.vessels.fluro, 
  data = dataset3, family = gaussian,
  prior_intercept = normal(71,65.5),
  prior = normal(0, 1, autoscale = TRUE), 
  prior_aux = exponential(1, autoscale = TRUE),
  chains = 4, iter = 5000*2, seed = 84735,prior_PD = TRUE)
## 
## SAMPLING FOR MODEL 'continuous' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 2.2e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.22 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:    1 / 10000 [  0%]  (Warmup)
## Chain 1: Iteration: 1000 / 10000 [ 10%]  (Warmup)
## Chain 1: Iteration: 2000 / 10000 [ 20%]  (Warmup)
## Chain 1: Iteration: 3000 / 10000 [ 30%]  (Warmup)
## Chain 1: Iteration: 4000 / 10000 [ 40%]  (Warmup)
## Chain 1: Iteration: 5000 / 10000 [ 50%]  (Warmup)
## Chain 1: Iteration: 5001 / 10000 [ 50%]  (Sampling)
## Chain 1: Iteration: 6000 / 10000 [ 60%]  (Sampling)
## Chain 1: Iteration: 7000 / 10000 [ 70%]  (Sampling)
## Chain 1: Iteration: 8000 / 10000 [ 80%]  (Sampling)
## Chain 1: Iteration: 9000 / 10000 [ 90%]  (Sampling)
## Chain 1: Iteration: 10000 / 10000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.112988 seconds (Warm-up)
## Chain 1:                0.148789 seconds (Sampling)
## Chain 1:                0.261777 seconds (Total)
## Chain 1: 
## 
## SAMPLING FOR MODEL 'continuous' NOW (CHAIN 2).
## Chain 2: 
## Chain 2: Gradient evaluation took 7e-06 seconds
## Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.07 seconds.
## Chain 2: Adjust your expectations accordingly!
## Chain 2: 
## Chain 2: 
## Chain 2: Iteration:    1 / 10000 [  0%]  (Warmup)
## Chain 2: Iteration: 1000 / 10000 [ 10%]  (Warmup)
## Chain 2: Iteration: 2000 / 10000 [ 20%]  (Warmup)
## Chain 2: Iteration: 3000 / 10000 [ 30%]  (Warmup)
## Chain 2: Iteration: 4000 / 10000 [ 40%]  (Warmup)
## Chain 2: Iteration: 5000 / 10000 [ 50%]  (Warmup)
## Chain 2: Iteration: 5001 / 10000 [ 50%]  (Sampling)
## Chain 2: Iteration: 6000 / 10000 [ 60%]  (Sampling)
## Chain 2: Iteration: 7000 / 10000 [ 70%]  (Sampling)
## Chain 2: Iteration: 8000 / 10000 [ 80%]  (Sampling)
## Chain 2: Iteration: 9000 / 10000 [ 90%]  (Sampling)
## Chain 2: Iteration: 10000 / 10000 [100%]  (Sampling)
## Chain 2: 
## Chain 2:  Elapsed Time: 0.117878 seconds (Warm-up)
## Chain 2:                0.139102 seconds (Sampling)
## Chain 2:                0.25698 seconds (Total)
## Chain 2: 
## 
## SAMPLING FOR MODEL 'continuous' NOW (CHAIN 3).
## Chain 3: 
## Chain 3: Gradient evaluation took 1e-05 seconds
## Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.1 seconds.
## Chain 3: Adjust your expectations accordingly!
## Chain 3: 
## Chain 3: 
## Chain 3: Iteration:    1 / 10000 [  0%]  (Warmup)
## Chain 3: Iteration: 1000 / 10000 [ 10%]  (Warmup)
## Chain 3: Iteration: 2000 / 10000 [ 20%]  (Warmup)
## Chain 3: Iteration: 3000 / 10000 [ 30%]  (Warmup)
## Chain 3: Iteration: 4000 / 10000 [ 40%]  (Warmup)
## Chain 3: Iteration: 5000 / 10000 [ 50%]  (Warmup)
## Chain 3: Iteration: 5001 / 10000 [ 50%]  (Sampling)
## Chain 3: Iteration: 6000 / 10000 [ 60%]  (Sampling)
## Chain 3: Iteration: 7000 / 10000 [ 70%]  (Sampling)
## Chain 3: Iteration: 8000 / 10000 [ 80%]  (Sampling)
## Chain 3: Iteration: 9000 / 10000 [ 90%]  (Sampling)
## Chain 3: Iteration: 10000 / 10000 [100%]  (Sampling)
## Chain 3: 
## Chain 3:  Elapsed Time: 0.116362 seconds (Warm-up)
## Chain 3:                0.109786 seconds (Sampling)
## Chain 3:                0.226148 seconds (Total)
## Chain 3: 
## 
## SAMPLING FOR MODEL 'continuous' NOW (CHAIN 4).
## Chain 4: 
## Chain 4: Gradient evaluation took 7e-06 seconds
## Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.07 seconds.
## Chain 4: Adjust your expectations accordingly!
## Chain 4: 
## Chain 4: 
## Chain 4: Iteration:    1 / 10000 [  0%]  (Warmup)
## Chain 4: Iteration: 1000 / 10000 [ 10%]  (Warmup)
## Chain 4: Iteration: 2000 / 10000 [ 20%]  (Warmup)
## Chain 4: Iteration: 3000 / 10000 [ 30%]  (Warmup)
## Chain 4: Iteration: 4000 / 10000 [ 40%]  (Warmup)
## Chain 4: Iteration: 5000 / 10000 [ 50%]  (Warmup)
## Chain 4: Iteration: 5001 / 10000 [ 50%]  (Sampling)
## Chain 4: Iteration: 6000 / 10000 [ 60%]  (Sampling)
## Chain 4: Iteration: 7000 / 10000 [ 70%]  (Sampling)
## Chain 4: Iteration: 8000 / 10000 [ 80%]  (Sampling)
## Chain 4: Iteration: 9000 / 10000 [ 90%]  (Sampling)
## Chain 4: Iteration: 10000 / 10000 [100%]  (Sampling)
## Chain 4: 
## Chain 4:  Elapsed Time: 0.11778 seconds (Warm-up)
## Chain 4:                0.140587 seconds (Sampling)
## Chain 4:                0.258367 seconds (Total)
## Chain 4:
summary(model_4)
## 
## Model Info:
##  function:     stan_glm
##  family:       gaussian [identity]
##  formula:      Max.HR ~ Chest.pain.type + FBS.over.120 + Slope.of.ST + Number.of.vessels.fluro
##  algorithm:    sampling
##  sample:       20000 (posterior sample size)
##  priors:       see help('prior_summary')
##  observations: 270
##  predictors:   5
## 
## Estimates:
##                           mean   sd    10%   50%   90%
## (Intercept)              71.6  119.1 -81.6  71.7 224.6
## Chest.pain.type           0.1   24.2 -31.3   0.2  31.3
## FBS.over.120             -0.1   65.9 -84.5  -0.5  84.7
## Slope.of.ST              -0.2   37.6 -48.7  -0.3  47.5
## Number.of.vessels.fluro   0.1   24.6 -31.3   0.2  31.5
## sigma                    23.1   23.1   2.5  16.0  53.1
## 
## MCMC diagnostics
##                         mcse Rhat n_eff
## (Intercept)             0.8  1.0  21486
## Chest.pain.type         0.2  1.0  22535
## FBS.over.120            0.4  1.0  21497
## Slope.of.ST             0.3  1.0  22650
## Number.of.vessels.fluro 0.2  1.0  20953
## sigma                   0.1  1.0  24269
## log-posterior           0.0  1.0   8934
## 
## For each parameter, mcse is Monte Carlo standard error, n_eff is a crude measure of effective sample size, and Rhat is the potential scale reduction factor on split chains (at convergence Rhat=1).
mcmc_trace(model_4, size = 0.1)

mcmc_dens_overlay(model_4)

mcmc_acf(model_4)

neff_ratio(model_4)
##             (Intercept)         Chest.pain.type            FBS.over.120 
##                 1.07430                 1.12675                 1.07485 
##             Slope.of.ST Number.of.vessels.fluro                   sigma 
##                 1.13250                 1.04765                 1.21345
rhat(model_4)
##             (Intercept)         Chest.pain.type            FBS.over.120 
##               1.0001173               0.9998513               0.9999540 
##             Slope.of.ST Number.of.vessels.fluro                   sigma 
##               0.9999339               0.9999719               1.0001102

The mcmc_trace() function plots the sampled chains. The chains produced for the above model appear to be well mixed, so they are considered stable.

The mcmc_dens_overlay() function overlays the density of each parameter for each chain. In the above graph the densities appear to overlap well with each other.

posterior_interval(model_4, prob = 0.90)
##                                  5%       95%
## (Intercept)             -123.620426 268.75015
## Chest.pain.type          -39.340134  39.69641
## FBS.over.120            -107.124480 108.82824
## Slope.of.ST              -62.329850  62.20978
## Number.of.vessels.fluro  -40.240539  40.13323
## sigma                      1.198026  69.63659
pp_check(model_4)

The pp_check() function plots the observed outcome variable y against the simulated draws y_rep. In the above graph the simulated values are spread widely while the observed values are concentrated around 190.

tidy(model_4,conf.int = TRUE,conf.level=0.9999)

As the table suggests, most of the predictors are statistically significantly associated with heart rate; in addition, the credible intervals for these predictors are very wide at the 99.99% level.

Compare the full and reduced models using appropriate measures of comparison. Which model would you prefer? Explain by drawing on appropriate evidence.

Let's compare the posterior predictive checks across all four models:

pp_check(model_1)

pp_check(model_2)

pp_check(model_3)

pp_check(model_4)

The posterior predictive checks for model_1 (the full model) and model_4 look almost identical, so I compare the two models numerically.

set.seed(2022) 
predictions <- posterior_predict(model_1, newdata = dataset)
dim(predictions)
## [1] 20000   270
set.seed(2022) 
predictions_r1 <- posterior_predict(model_2, newdata = dataset)
dim(predictions_r1)
## [1] 20000   270
set.seed(2022) 
predictions_r3 <- posterior_predict(model_3, newdata = dataset)
dim(predictions_r3)
## [1] 20000   270
set.seed(2022) 
predictions_r4 <- posterior_predict(model_4, newdata = dataset)
dim(predictions_r4)
## [1] 20000   270
ppc_intervals(dataset$Max.HR,
              yrep = predictions, 
              prob = 0.5, 
              prob_outer = 0.95)

ppc_intervals(dataset$Max.HR,
              yrep = predictions_r4, 
              prob = 0.5, 
              prob_outer = 0.95)

set.seed(84735)
cv_procedure <- prediction_summary_cv(
  model = model_1, data = dataset, k = 10)
cv_procedure$folds
cv_procedure$cv
set.seed(84735)
cv_procedure <- prediction_summary_cv(
  model = model_2, data = dataset, k = 10)
cv_procedure$folds
cv_procedure$cv
set.seed(84735)
cv_procedure <- prediction_summary_cv(
  model = model_3, data = dataset, k = 10)
cv_procedure$folds
cv_procedure$cv
set.seed(84735)
cv_procedure <- prediction_summary_cv(
  model = model_4, data = dataset, k = 10)
cv_procedure$folds
cv_procedure$cv
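The prediction_summary_cv calls above perform 10-fold cross-validation: the data is split into 10 folds, each fold in turn is held out, the model is refit on the rest, and the held-out prediction error is averaged. A generic analogue in Python, not the bayesrules implementation, with a toy mean-only "model" standing in for the fitted regression:

```python
import numpy as np

def kfold_mae(X, y, fit, predict, k=10, seed=84735):
    """k-fold cross-validated median absolute error: a generic analogue
    of prediction_summary_cv(), not its implementation.

    fit(X, y) -> model; predict(model, X) -> point predictions.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    fold_maes = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train_idx], y[train_idx])
        errors = np.abs(y[test_idx] - predict(model, X[test_idx]))
        fold_maes.append(np.median(errors))   # per-fold median absolute error
    return float(np.mean(fold_maes))          # averaged across folds

# Toy usage: predict every heart rate with the training mean
y = np.random.default_rng(0).normal(150, 20, 270)
X = np.zeros((270, 1))
fit = lambda X_tr, y_tr: y_tr.mean()
predict = lambda m, X_te: np.full(len(X_te), m)
cv_mae = kfold_mae(X, y, fit, predict)
```

Because every prediction is made on data the model never saw, the cross-validated MAE is an honest estimate of out-of-sample error, unlike the in-sample summaries below.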
set.seed(84735)
loo_1 <- loo(model_1)
## Warning: Found 270 observations with a pareto_k > 0.7. With this many problematic observations we recommend calling 'kfold' with argument 'K=10' to perform 10-fold cross-validation rather than LOO.
loo_2 <- loo(model_2)
## Warning: Found 270 observations with a pareto_k > 0.7. With this many problematic observations we recommend calling 'kfold' with argument 'K=10' to perform 10-fold cross-validation rather than LOO.
loo_3 <- loo(model_3)
## Warning: Found 270 observations with a pareto_k > 0.7. With this many problematic observations we recommend calling 'kfold' with argument 'K=10' to perform 10-fold cross-validation rather than LOO.
loo_4 <- loo(model_4)
## Warning: Found 270 observations with a pareto_k > 0.7. With this many problematic observations we recommend calling 'kfold' with argument 'K=10' to perform 10-fold cross-validation rather than LOO.
loo_compare(loo_1, loo_2, loo_3, loo_4)
##         elpd_diff     se_diff      
## model_3  0.000000e+00  0.000000e+00
## model_1 -4.788522e+11  1.842865e+10
## model_4 -1.582883e+12  7.272224e+10
## model_2 -1.770927e+12  3.352085e+10
set.seed(84735)
prediction_summary(model = model_1, data = dataset)
set.seed(84735)
prediction_summary(model = model_2, data = dataset)
set.seed(84735)
prediction_summary(model = model_3, data = dataset)
set.seed(84735)
prediction_summary(model = model_4, data = dataset)

Model 1, the full model, has an MAE of 31.1, while model 2, containing only the predictors of interest, has an MAE of 18.1. The reduced third model has an MAE of 15, and the filtered predictors in the fourth model give an MAE of 20.3, the highest of all the models apart from the full model.
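The MAE reported by prediction_summary can be illustrated directly from a posterior prediction matrix: take the posterior median prediction for each observation, then the median absolute error against the observed outcome. A sketch with hypothetical stand-ins for the real draws and the observed Max.HR, not the bayesrules implementation:

```python
import numpy as np

rng = np.random.default_rng(84735)

# Hypothetical stand-ins: 20,000 posterior predictions for each of the
# 270 observations (the shape returned above by posterior_predict())
predictions = rng.normal(150, 25, size=(20_000, 270))
max_hr = rng.normal(150, 23, size=270)    # stand-in for dataset$Max.HR

# Point prediction per observation = posterior median of its draws
point = np.median(predictions, axis=0)

# MAE as summarised here: median absolute error of the point predictions
mae = np.median(np.abs(max_hr - point))
```

Note this MAE is in the units of the outcome (beats per minute for Max.HR), so smaller values mean tighter predictions.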

Conclusions

How many heart-related diseases can be predicted by analyzing the human heart rate?

As mentioned earlier, human heart rate alone cannot account for predicting heart complications, but analyzing it can yield some useful insights. In the dataset I considered, only a limited set of parameters can be analysed through real-time heart-rate analysis. The predictors that can be analysed and that have the most statistical significance for this question are Chest.pain.type, FBS.over.120, Slope.of.ST, and Number.of.vessels.fluro.

Identify appropriate limitations of your analyses.

There are many limitations to my analysis, since the project relates to the medical field. The human body can exhibit many unusual circumstances, so the data needs to be transformed with input from doctors and other medical professionals in order to avoid erroneous decisions. A mixed-methods approach could address this problem: it combines quantitative data, provided by the health trackers, with qualitative data provided by medical professionals or doctors.

Identify two additional research questions that you can answer by doing a follow-up study that builds on the current study. Focus specifically on whether the same variables suffice or if additional variables need to be included. What approach for collecting data will be needed?

The dataset I considered is confined to a small group of people with limited resources, so my analysis may hold only under some circumstances, and the medical techniques involved in measuring these parameters are updated on a daily basis.

To scale my analysis up to a larger picture, more parameters would need to be recorded, based on geographical location and other factors, for accurate analysis. Hayward and Hofer (2001) cite many different case studies in which a lack of accurate information is the leading reason for medical errors, and medical errors are the third most common cause of death.

References:

Hayward, R. A., & Hofer, T. P. (2001). Estimating hospital deaths due to medical errors: preventability is in the eye of the reviewer. JAMA, 286(4), 415-420.

Reflections

What challenges did you face when conceptualizing your project’s idea in terms of choice of dataset, choice of research questions, and choice of variables?

There were several difficult challenges in turning the idea into a problem statement. A dataset with the required parameters was extremely hard to find, but I managed to locate one in the UCI machine learning repository, although it contains a number of parameters that are not needed in my research. The variables I filtered out for my analysis correspond to some of the most common problems that the majority of heart patients face.

Translating the problem statement into research questions was the most time-consuming task in the whole analysis. I formulated the research questions after reading a research paper showing that the most common problems experienced can be diagnosed by measuring simple parameters (Gunčar, G., Kukar, M., Notar, M., Brvar, M., Černelč, P., Notar, M., & Notar, M. (2018)).

In what specific ways did you address these challenges?

I addressed the choice of research questions by relying on the research paper mentioned above, which shows that the most common problems experienced can be diagnosed by measuring simple parameters (Gunčar, G., Kukar, M., Notar, M., Brvar, M., Černelč, P., Notar, M., & Notar, M. (2018)).

Reference:

Gunčar, G., Kukar, M., Notar, M., Brvar, M., Černelč, P., Notar, M., & Notar, M. (2018). An application of machine learning to haematological diagnosis. Scientific reports, 8(1), 1-12.

What did you learn about the Bayesian approach to analyzing data, while working on the project and on the assignments in previous weeks in the course? Identify two-three keys insights you have had as part of your learning process.

I gained many insights from this course that I will carry forward into future analyses. The first is building different kinds of models and measuring their probability. The second, which I found particularly interesting, is the specification of priors using the stan_glm function.